
Activision User Surveys

Responsive Surveys
Web surveys are an essential research tool for marketing, social research, and official statistics. According to ESOMAR (the Global Association for Data and Insights), online surveys account for 20% of global data-collection expenditure.

• “Web surveys are faster, simpler, and cheaper”.
• “The entire data collection period is significantly shortened”.
• “[They] are less intrusive, and suffer less from social desirability effects”.
• “[They] can be tailored to the situation (respondents may be allowed to save a partially completed form, the questionnaire may be preloaded with already available information, etc.)”.
• “Online questionnaires may be improved by applying usability testing, where usability is measured with reference to the speed with which a task can be performed, the frequency of errors and user satisfaction with the interface”.


Project Goals
Validating or increasing CSAT (or any other metric, for that matter) depends entirely on better insights, so these first two goals are closely related.

There is a 5th goal (decrease effort), which ties into the last 2 (increase volume) in multiple ways. By decreasing the user effort required to take each survey, we’ll be able to increase survey volume (both the number of surveys presented to users and the amount of actionable data we retrieve from each interaction).


Goal 1: Validate or Increase CSAT

The current web survey is fairly long, but it asks 2 questions at the start (Is the website easy to use? Is the information useful?) whose average yields what we call the Service Satisfaction score. We’ll continue asking these questions to a sample of users, indefinitely. But there’s an ever-growing wealth of knowledge on how to design better surveys, which lets us use insights to improve how support is provided, and how business is done.

Going forward, we’ll expand upon our understanding of all these metrics, with shorter targeted surveys, and industry standard question formats. We’ll go beyond CSAT altogether, capturing other crucial scores - like Net Promoter and Customer Effort.
Be skeptical of things you find on the web. Checkmarket specializes in surveys, but the American Customer Satisfaction Index contradicts a lot of their claims. It turns out survey scales can be used somewhat interchangeably, depending on the calculation method and the preferred survey layout. Some companies still insist on 5-point scales for ease of completion, but the results are less precise. A 1-10 scale (not 0-10) is now preferred for statistical analysis, since it has an even number of points (no neutral midpoint choice).

CSAT: Customer Satisfaction | Straightforward satisfaction metric
“How good did that interaction feel to you?”

NPS: Net Promoter Score | Brand loyalty predictor
“How likely are you to recommend our brand to someone else?”

CES: Customer Effort Score | Essential for UX learnings
“How much work did it take to get this thing done?”

SSAT*: Service Satisfaction | Ease of Use + Usefulness (average of the two current questions)
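
For reference, here is a minimal calculation sketch (TypeScript, with hypothetical field names): CSAT as the top-2-box percentage on a 5-point scale, NPS as promoters minus detractors on a 0-10 scale, CES as a simple average, and SSAT as the average of the two current questions. These are the common textbook calculations, not necessarily the exact method we will adopt after the scale review.

```typescript
// Hypothetical response shape – field names and scales are illustrative only.
interface SurveyResponse {
  csat?: number;       // 1–5 satisfaction rating
  nps?: number;        // 0–10 likelihood to recommend
  ces?: number;        // 1–7 agreement with "made it easy to handle my issue"
  easeOfUse?: number;  // current SSAT question 1 (1–5)
  usefulness?: number; // current SSAT question 2 (1–5)
}

const defined = (xs: (number | undefined)[]): number[] =>
  xs.filter((x): x is number => x !== undefined);

// CSAT: percentage of responses rating 4 or 5 on a 5-point scale ("top-2-box").
function csatScore(responses: SurveyResponse[]): number {
  const ratings = defined(responses.map(r => r.csat));
  const satisfied = ratings.filter(x => x >= 4).length;
  return ratings.length ? (satisfied / ratings.length) * 100 : 0;
}

// NPS: % promoters (9–10) minus % detractors (0–6), on a 0–10 scale.
function npsScore(responses: SurveyResponse[]): number {
  const ratings = defined(responses.map(r => r.nps));
  if (!ratings.length) return 0;
  const promoters = ratings.filter(x => x >= 9).length;
  const detractors = ratings.filter(x => x <= 6).length;
  return ((promoters - detractors) / ratings.length) * 100;
}

// CES: simple mean of the effort/agreement ratings.
function cesScore(responses: SurveyResponse[]): number {
  const ratings = defined(responses.map(r => r.ces));
  return ratings.length ? ratings.reduce((a, b) => a + b, 0) / ratings.length : 0;
}

// SSAT: average of the two existing questions (ease of use + usefulness).
function ssatScore(responses: SurveyResponse[]): number {
  const pairs = responses.filter(r => r.easeOfUse !== undefined && r.usefulness !== undefined);
  if (!pairs.length) return 0;
  const sum = pairs.reduce((a, r) => a + (r.easeOfUse! + r.usefulness!) / 2, 0);
  return sum / pairs.length;
}
```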

To summarize Goal 1: CSAT, along with NPS and CES, will become part of our metrics going forward. Collectively they will help validate one another while reducing error margins. Better data means better products and services, which almost invariably achieve better CSAT and other metrics.

A 2018 CustomerThink study found that businesses may start out using one of these metrics, but over time tend to use more and more, including composite indicators that can be proven to link to desired business outcomes. We will also continue to capture the current SSAT with similar questions, as long as it proves useful. More data is always desirable, and can only help inform future Service Metrics and KPI decisions. 

CustomerThink also found that the choice of metrics is not a major factor in CX success; what matters is how the organization acts on those insights. Starting with a comparison of the current survey experience against multiple future targeted surveys, browsing data (Analytics) and A/B testing will be our tools to extract data that is actionable for the User Experience, the Player Support services, and ultimately the brand.
Continuous iterative improvement of both design and survey scores completes a positive feedback loop of data-driven product development. Better surveys lead to better insights. Better insights lead to more actionable business decisions. This means improved products, and improved customer service.

Goal 2: Deliver Better Insights

Currently, ops data is fairly consolidated. Both Social and Live have their own Inquisium survey sets, and all the additional data from those interactions is managed from within Salesforce.

On the web, things are a bit more complicated.
• Browsing data is accessed through Analytics
• Cvent Inquisium survey data is accessed through Salesforce (similar to ops)
• A/B testing used to be achieved through Salesforce, and we’re migrating the website to AEM (with basic integrated A/B testing capabilities)
Adobe’s ecosystem offers integrated solutions to these needs. Specifically, Adobe Target works seamlessly with AEM and Analytics to enable advanced A/B testing with AI and unlimited variations, and also helps present contextual web surveys at the right time.

This is a possible path to an integrated solution, throughout the entire product development cycle. Informative user journey analysis features are also offered through Adobe Campaign and Audience Manager.
Cvent Inquisium is our current survey solution – but the learning-curve impact of Target should be minimal in the context of our transition to AEM. And the good news is, we’d leverage some of its superior technologies to deploy smarter surveys, integrate data with Analytics, and apply that information to automated, continuous experience optimization through advanced A/B testing AI.

Cvent Inquisium
Pros
• Current solution for web surveys
• Used for ops surveys
Cons
• Not integrated into Adobe
• No Exit Intent technology yet

Adobe Target
Pros
• Integrates with AEM/Adobe Analytics
• Exit Intent lightbox
• Advanced A/B testing
• Can be used for A/B testing, along with Inquisium for surveys
• AI-powered personalization with Sensei*
Cons
• New product learning curve

There are some interesting resources on integrating AEM with Adobe Analytics and Adobe Target: Analytics to extract clearer conclusions from real-time and survey data, and Target to quickly A/B test new design hypotheses.
We’ll be able to estimate a rough baseline for existing and new metrics, but it would be risky to commit to an ambitious baseline with new metrics before a comfortable period of A/B testing with a large population sample size. Again, how much we learn is ultimately the most important metric, by far.
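
To illustrate what “a large population sample size” means in practice before committing to a new-metric baseline, the standard sample-size approximation for estimating a proportion is sketched below. The margin of error and confidence level shown are placeholder assumptions, not targets.

```typescript
// Approximate number of completed surveys needed to estimate a proportion
// (e.g. a CSAT percentage) within a given margin of error.
// Standard formula: n = z^2 * p * (1 - p) / e^2
function requiredSampleSize(
  marginOfError = 0.03, // assumption: ±3 percentage points
  confidenceZ = 1.96,   // z-score for 95% confidence
  expectedP = 0.5       // 0.5 is the most conservative assumption
): number {
  return Math.ceil((confidenceZ ** 2 * expectedP * (1 - expectedP)) / marginOfError ** 2);
}

// Example: ~1,068 completed surveys for ±3% at 95% confidence.
console.log(requiredSampleSize());
```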

Besides what modern analytics provide, a lot of useful insights can still come from relatively rudimentary Metrics and KPI calculation, and more importantly, analysis of open-ended questions to extract meaningful suggestions out of the endless pile of cruel, post-rage-quit site reviews…

Goals 3 & 4: Decrease Effort, Increase Volume

If we were to maintain the length and design of the current surveys while increasing the volume of survey offerings to users, we would inevitably create more user frustration, leading to both lower completion rates and lower scores.
The negative effects of this frustration can also extend beyond surveys, distorting the way people see the website, and the brand as a whole. Ultimately, this also generates less actionable data, particularly within our current process, which is not optimal.
The answer to 3 out of 5 goals lies in small, journey-targeted, contextual surveys. The key to a successful survey is asking the right questions at the right moment – that way, visitors are more likely to participate and share their thoughts. This also helps keep each separate survey shorter, noticeably improving completion rates.

By decreasing survey length, and tailoring questions for specific user journeys, we can drastically increase the volume of offerings, while simultaneously also increasing submissions. 
Short surveys are more transparent within the experience, therefore less upsetting to users. They are also less disruptive to data accuracy, and generate more actionable data with each submission.

Objectives & Success Criteria

The broad objective of this project is to combine Player Support learnings from the past, with the latest best practices, and propose survey designs that ask better questions, are easier to answer, and empower our business to innovate and develop cutting edge customer services and products. By doing so, we will inevitably set new standards for customer satisfaction, usability, and brand perception. The establishment of a baseline is dependent on statistics – and with a better understanding of those statistics, the baseline becomes more reliable.

Objectives
• Update survey design and logic with latest best practices
    • Provide evidence-backed UX improvements (survey scales, mobile interaction, gamification, etc.)
    • Offer surveys about specific experiences, for useful feedback and improvement opportunities
    • Understand customer intent and journey: study site locations and user populations for which different surveys are triggered (and how often each should be triggered)
    • Restructure the type and number of questions asked in each targeted survey (through research, practices, competitive analysis)
• Validate or set new target for metrics (SSAT*, CSAT, NPS, CES, Volume, etc.)
• Establish a website survey effort baseline

Success Criteria
• Decrease effort for analytics to deliver actionable feedback
• Deliver clear and concise responses
• Effort baseline established
• CSAT, volume offering, conversion rate do not decrease below baseline

Competitive Analysis
Chase App
This kind of modal takeover has been used on the Activision Support site, and will continue to be used, both for the legacy web survey during A/B testing and for the new Exit survey. The Chase App uses a nonstandard approach to what appears to be essentially a Net Promoter Score template.

Autodesk
Another short 5-point CSAT survey, this one from Autodesk, made available through a permanent “Feedback” tab.

Adobe
Adobe uses a combination of the Feedback Tab pattern with a simple Helpful Yes or No question. In 3 clicks and a few characters, they had efficiently recorded my feedback :) They don’t always keep it simple in their software, but on their website they do :) Adobe doesn’t even provide submission confirmation – you click submit and the window is just… gone! (Seriously though, that bit is not an example of good UX.)

Research Findings

• “Surveys should exhibit word economy and item efficiency. While internal test/retest for target concepts is an appropriate approach, overall the survey should be short, easy to understand, and minimize the amount of conscious thought the respondent must use to provide answers. This will help the survey produce valid and reliable results that accurately measure the topics being tested.”
 
• Ask the critical NPS or CSAT question at the very beginning. “Otherwise . . . Data can be influenced by the format of the survey.”

• No popup surveys for visitors on their first page, or on transactional pages (users get frustrated when they’re interrupted)
 
• Don’t use more than one comment box (or free response) question – these are harder to type on mobile.
 
• “A poll on the homepage may be too invasive, [so] there are other ways to collect user feedback. For example, . . . an optional “feedback” widget . . . on the right-hand side of the page that allows users to leave quick feedback if they want, but doesn’t disrupt their journey with a pop-up.” Note: this would allow more dissatisfied users to submit feedback at will to vent their frustrations, so results from the clicked Feedback tab should be kept separate from random-sample survey results until we can compare them and draw the conclusions needed to adjust the future CSAT baseline.

• Invite rate should reach exactly the percentage of visitors whose data we will use in analysis – no more and no less.
 
• “Consider triggering a survey only when visitors have spent 30 seconds or more on the page, or scrolled halfway down. This solution will help avoid distracting people when they are first reading the page, and also give them time to determine whether they're able to find the information they need”. (A minimal trigger sketch based on this and the invite-rate finding above appears after this list.)
 
• “Word free text questions carefully so users are able to freely give negative feedback. One way to do this is to explicitly encourage honest feedback (i.e. “What’s the main reason for your score? Please be 100% honest: we need your feedback to really improve”)”
 
• Actionability is key. Businesses should avoid getting fixated on big data or response rates. Survey insights should be useful for making positive short-to-medium-term changes to website design. If data can’t be acted on within ~30 days, it’s not worth collecting, and the business case for that specific survey should be reconsidered.
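
Putting two of these findings together (the 30-second / half-page trigger, and an invite rate that matches the share of visitors whose data we will actually analyze), a rough client-side sketch might look like the following. The sampling percentage, timings, and function names are illustrative assumptions, not the final implementation.

```typescript
// Hypothetical trigger logic for a targeted survey invite.
const INVITE_RATE = 0.05;           // assumption: invite exactly the share of visitors we will analyze
const MIN_TIME_ON_PAGE_MS = 30_000; // "30 seconds or more on the page"
const MIN_SCROLL_DEPTH = 0.5;       // "scrolled halfway down"

const eligibleForInvite = Math.random() < INVITE_RATE; // decided once per visitor/page
let invited = false;
const loadedAt = Date.now();

function scrollDepth(): number {
  const scrollable = document.documentElement.scrollHeight - window.innerHeight;
  return scrollable > 0 ? window.scrollY / scrollable : 1;
}

function maybeInvite(showSurvey: () => void): void {
  if (!eligibleForInvite || invited) return;
  const timeOnPage = Date.now() - loadedAt;
  if (timeOnPage >= MIN_TIME_ON_PAGE_MS || scrollDepth() >= MIN_SCROLL_DEPTH) {
    invited = true;
    showSurvey();
  }
}

// Re-check on scroll and on a timer, so either condition can fire the invite.
window.addEventListener('scroll', () => maybeInvite(renderSurveyBanner), { passive: true });
setTimeout(() => maybeInvite(renderSurveyBanner), MIN_TIME_ON_PAGE_MS);

// Placeholder: the actual banner/lightbox would be rendered by the survey tool.
function renderSurveyBanner(): void {
  console.log('Survey invite shown');
}
```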

Current Survey
The current survey uses a rather rudimentary prediction of exit intent and asks the user to remember a new tab, leaving plenty of opportunities for user error.
Language selection is asked, but could instead be detected or inherited for a less intrusive experience.
Required fields mean users can’t submit partial forms without answering those questions. Some users will decide to fill it out, others may give up and close the window (reducing the amount of useful information we get).
There are two open-ended questions on the same survey, which goes against the best practice of including no more than one in any given survey.

Targeted Survey Types
Targeted Survey 1: New Visitor

Randomly presented to anonymous users. Can be introduced as a corner/edge banner, less disruptive to the web and mobile experience. This may reduce the number of surveys taken, and one way to compensate for that is to show said banner to more users. 

Pros
• Collected insights help marketing, SEO and UX research 
• Insights help with holistic aspects of content, site and product design, even outside the boundaries of customer service 
Cons
• Can be seen as intrusive (particularly if not used in edge banner format) 
• Contradicts research that discourages presenting surveys on the first page visited 
• No account information is available about the users presented with this survey, so they may be returning visitors and/or may have been prompted before

Questions 
• How did you first hear about Activision Support? (bullets, and open field) 
• What was your first impression when you entered the Activision Support website?


Targeted Survey 2: Mid-Browsing

Randomly presented after the user has scrolled at least 75% of a Knowledge Base article, Product page, or other known support content page, like Contact Us. 
Can be introduced as a corner/edge banner, less disruptive to the web and mobile experience. This may reduce the number of surveys taken, which can be countered by showing said banner to more users, or providing users with an incentive to complete it. 

Pros 
• Collected insights help content research, and improve effort, satisfaction and net promoter scores 
• Provide crucial insights for improving conversion rates 
Cons 
• Can be seen as intrusive (particularly if not used in edge banner format) 
• No account information is available about the users presented with this survey, so they may be returning visitors and/or may have been prompted before

Questions 
• Did this article/page answer all your questions? (Yes/No) 
• Is there something missing on this page? (Free text) 
• What would you like us to write about next? (Free text)


Targeted Survey 3: New Account

Presented after user has registered an Activision account for the first time. Can be introduced with popup/interstitial. 

Pros
• Collected insights help marketing and UX research 
• Data can be tied to the account for specific content/engagement/support targeting 
• Insights help with holistic aspects of content and brand design 
Cons
• Can disrupt the new registration experience 

Questions
• What makes Activision stand out from other brands? (bullet options, input field) 
• Which of the following tools do you use? (multiple checkbox options, referring to support site tools and other 3rd party gaming utilities and sites like DIM, PlayStation, Xbox Live, etc.)


Targeted Survey 4: Returning User

Presented after registered user (with some account history) logs in again. Can be introduced through popup/interstitial. 

Pros
• Collected insights help website UX research and development 
• Data can be tied to the account for specific content/engagement/support targeting 
• Insights like Net Promoter Score are more reliable when asked of known returning users
Cons
• Can disrupt the registered user experience 

Questions 
• What is Activision Support’s most important feature to you? (bullet options, text input) 
• What’s one of the most important features Activision Support should add? (text input) 
• Net Promoter score question 
• What do you like most/least about Activision Support? (respective drop down selections for most/least liked aspects/features, with text input option)


Targeted Survey 5: Exit

Can use the current site survey behavior (load a new tab), or Adobe Target’s “Lightbox with Exit Intent”. A higher completion rate means this survey can be slightly longer than other targeted surveys.

Pros 
• High response rates, often outperforming other kinds of website surveys 
• Collected insights help research and reduce bounce and exit rates 
• Provide crucial insights for improving conversion rates 
Cons 
• It may appear when a user doesn’t mean to leave the site, and merely switches tabs
• Results can be biased as some visitors see it as an opportunity to express their frustration – they are more motivated to participate in a survey than those who are happy or neutral about their experience 

Starting Question 
• Did you meet the goal of your visit? (yes, no) 

Questions 
• “Activision made it easy for me to handle my issue”, Strongly agree – Strongly disagree (CES, Effort) 
• How would you rate your experience with Activision Support? (CSAT) 
• What was your goal? (bullet options, free text field) 
• What made you leave the Activision Support website? (bullet options, free text field) 
• What are your 3 favorite blogs or websites? (3 free text fields)


Design Proposal
This new design relies on multiple short surveys with different combinations of questions, which means we’ll keep track of individual user journeys, and differentiate answers to questions submitted at different points of the site. 
It also features the ability to give feedback at will from most site locations, which requires tracking answers given through the random popup takeover and differentiating them from answers submitted voluntarily using the Feedback tab (frustrated or angry users tend to use Feedback features as a last resort to make themselves heard, so we’ll compensate mathematically for this difference).
In the end, the wealth of information provided by both methods should make a significant positive difference in actionable feedback volume. 
We’ll also get rid of “required fields”, and make logic tweaks so the form pushes partially-filled information to the server at intervals – which means people won’t even have to finish filling out their survey before we can capture some of the answers (the first ones are always the most crucial anyway). 
Text blocks are kept to a minimum – but the subtle addition of a timer is a reliable gamification pattern that triggers competitive behavior, improving mental focus and task completion rates. In this case, it should slightly raise the probability of receiving more completed surveys.
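
The partial-submission behavior described above could be sketched roughly as follows, with each submission tagged by source so Feedback-tab answers stay separate from random-sample invites, as the research findings recommend. The endpoint, payload shape, and interval are assumptions for illustration only.

```typescript
// Hypothetical partial-save logic: push whatever the user has answered so far
// at a fixed interval, tagged with how the survey was opened.
type SurveySource = 'random-invite' | 'feedback-tab' | 'exit-intent';

interface PartialSubmission {
  surveyId: string;
  source: SurveySource; // separates voluntary Feedback-tab answers from sampled invites
  answers: Record<string, string | number>;
  complete: boolean;
}

const AUTOSAVE_INTERVAL_MS = 10_000; // assumption: every 10 seconds

function startAutosave(
  surveyId: string,
  source: SurveySource,
  getAnswers: () => Record<string, string | number>
): () => void {
  let lastSent = '';
  const send = (complete: boolean) => {
    const payload: PartialSubmission = { surveyId, source, answers: getAnswers(), complete };
    const body = JSON.stringify(payload);
    if (body === lastSent && !complete) return; // nothing new to save
    lastSent = body;
    // '/api/survey-partial' is a placeholder endpoint, not the real service.
    navigator.sendBeacon('/api/survey-partial', body);
  };
  const timer = setInterval(() => send(false), AUTOSAVE_INTERVAL_MS);
  // The returned callback flushes a final, complete submission and stops autosaving.
  return () => {
    clearInterval(timer);
    send(true);
  };
}
```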

Mid-Browsing Survey
It’s important for the experience to present a Thank You message after submitting.
Optionally, we could use the space to promote a much longer “Full” survey, which would take a few minutes of the respondent’s time. In this scenario, an incentive (in this case, a chance to win digital rewards) is used to get people to think of the survey as a fun activity and opportunity, instead of a daunting chore.

Returning User Survey
Exit Survey


Additional References and Resources

Best Practices in Questionnaire Design – Hanover Research Report: https://www.gssaweb.org/wp-content/uploads/2015/04/Best-Practices-in-Questionnaire-Design-1.pdf
https://www.oecd-ilibrary.org/docserver/9789264167179-6-en.pdf
https://www.theacsi.org/about-acsi/the-science-of-customer-satisfaction
https://www.nngroup.com/articles/satisfaction-vs-performance-metrics/
https://www.nngroup.com/articles/keep-online-surveys-short/
https://www.nngroup.com/articles/qualitative-surveys/
https://www.nngroup.com/articles/which-ux-research-methods/
https://www.hotjar.com/blog/analyze-open-ended-questions/
https://www.hotjar.com/blog/website-survey-questions/
https://www.hotjar.com/blog/customer-satisfaction-survey/
https://www.surveymonkey.com/mp/customer-feedback-guide/
https://www.surveymonkey.com/curiosity/10-online-survey-tips/
https://www.surveymonkey.com/mp/survey-guidelines/
https://survicate.com/website-survey/exit/
https://survicate.com/website-survey/questions/
https://survicate.com/customer-satisfaction/survey/
https://blog.hubspot.com/service/how-to-measure-customer-satisfaction
https://blog.hubspot.com/service/customer-satisfaction-survey-examples
https://blog.hubspot.com/service/customer-satisfaction-score
https://www.qualtrics.com/experience-management/customer/what-is-csat/
https://www.qualtrics.com/blog/10-tips-for-building-effective-surveys/
https://zapier.com/learn/forms-surveys/writing-effective-survey/
https://www.checkmarket.com/blog/csat-ces-nps-compared/
http://digital.mobius.one/2017/02/24/net-promoter-score-nps-customer-effort-score-ces/
https://panorama-www.s3.amazonaws.com/files/survey-resources/checklist.pdf
https://www.snapsurveys.com/blog/25-ways-increase-survey-response-rates/
https://www.callcentrehelper.com/how-to-calculate-customer-satisfaction-csat-109557.htm
https://www.nextiva.com/blog/survey-best-practices.html